Don’t Reinvent the Wheel to Govern AI
from Net Politics and Digital and Cyberspace Policy Program

U.S. government agencies are being asked to rapidly scale up their AI governance efforts. Rather than trying to create a governance model from scratch, the U.S. government should integrate AI governance into agencies' existing mandates.
People walk near a sign for Sapeon, an artificial intelligence (AI) chip company, at the Mobile World Congress (MWC) in Barcelona, Spain on February 27, 2024. Bruna Casas/Reuters

Recent advancements in artificial intelligence have captured the public’s attention, and debates about how to govern this transformative technology have preoccupied leaders in Washington, Brussels, Beijing, and beyond. Within the last year, policymakers have pursued a litany of AI governance measures, from the European Union’s sweeping AI Act to state- and city-level initiatives in the United States. These approaches vary widely in their scope and scale, as well as their attitudes toward balancing the often-competing goals of promoting AI safety and preserving innovation incentives.

To date, the U.S. government has yet to release any binding AI regulations—most high-level governance efforts in the United States today focus on controlling the government’s own use of the technology and encouraging AI developers to adopt voluntary compliance measures. Lawmakers on both sides of the aisle have batted around a variety of proposals, with some going so far as to push for a new agency to manage the development and deployment of AI systems. These plans imply that the task of regulating AI is too big and too novel for any existing federal regulator.

However, this assumption isn’t necessarily true. Despite the hype surrounding the technology, AI systems are just tools that help people and organizations perform tasks, and most of the areas where these tools could be deployed already fall under the jurisdiction of one or more government regulators. As such, agencies may not need new legal powers to regulate AI in many cases—their existing authorities can likely do the trick just fine. Furthermore, given the speed with which AI tools are rolling out across the economy, it will be crucial for agencies to start addressing the risks of AI using the powers they already have on the books.

Using existing regulatory toolkits to manage AI offers many benefits, at least in the short term. First and foremost is speed. AI is evolving rapidly and organizations across the economy are eager to integrate the technology into their products and operations. The risks presented by these AI tools will grow as they become more widely adopted, and the government needs to act quickly if it hopes to stay ahead of the curve. By using existing authorities, agencies can start governing AI today rather than waiting for a hotly divided Congress to enact comprehensive AI legislation or stand up an entirely new regulator.

Additionally, using existing authorities can help regulators address what are likely to be a range of highly sector-specific AI applications. Since AI is a general-purpose technology, policymakers cannot realistically create a set of universally applicable, one-size-fits-all regulations. Effective governance will hinge as much on the details of specific use cases as on the nuances of the technology itself. Regulatory agencies already employ experts with a deep understanding of the industries, networks, and systems in which AI applications will be deployed. Delegating responsibilities for AI governance across these agencies—rather than consolidating them in a single new entity—will let the government make better use of this in-house expertise. Policymakers may need additional education to regulate AI effectively, of course, but teaching a diverse array of subject-matter experts how AI works is easier than educating AI experts in the nuances of every area where the technology will be applied.

Perhaps most importantly, using existing governance structures is also legally feasible. Many regulators have statutory powers that imply jurisdiction over a wide range of technologies, including AI. In the case of the Federal Aviation Administration (FAA), for example, the agency is legally responsible for “prescribing minimum standards required in the interest of safety for appliances … aircraft, aircraft engines, and propellers.” This authority applies whether or not AI systems are present on the aircraft.

As we detail in a new report from Georgetown University’s Center for Security and Emerging Technology, the FAA’s existing authorities are indeed flexible enough to govern AI systems integrated into aircraft onboard systems and air traffic control infrastructure, as well as other parts of the aviation ecosystem. We expect most regulators who undertake this exercise will find they too already have the power to make rules for AI systems in their areas of interest. 

To be sure, using existing authorities to govern AI will entail challenges. Regulators will need to overhaul software assurance procedures, testing and evaluation standards, and other processes to accommodate AI’s unique challenges. Many agencies will also need to expand their technical workforce to develop and implement AI governance regimes. Furthermore, some AI applications will likely fall outside the purview of agencies’ existing authorities, and new statutes may be needed to mitigate harms in such cases. While federal policymakers are already working to address some of these issues, it will take time for their efforts to yield results.

Federal agencies are also seeing their ability to govern the private sector contested at a more existential level. It remains unclear how the Supreme Court’s decision to overturn the Chevron doctrine in Loper Bright Enterprises v. Raimondo will affect the legality of different federal regulatory regimes, and certain watchdogs, like the FAA, have come under fire after high-profile safety mishaps at the companies under their purview. That said, using existing authorities to govern AI may be even more practical in light of these headwinds, as it will not require agencies to expend limited resources establishing brand-new regulatory processes that could themselves invite legal challenges.

Given the election season and the generally sluggish pace of Congress, we should not expect to see movement on any comprehensive AI governance package until at least next year. In the meantime, agencies should start taking stock of their regulatory toolkits and determine where and how their existing powers can apply to AI. Some agencies, such as the U.S. Department of Health and Human Services, have already completed such assessments, and it would behoove others to follow their lead. Armed with this knowledge, policymakers can start setting guardrails for the AI systems under their purview and pushing for whatever targeted powers they may need to fill in the gaps.

Over the years we have entrusted federal agencies with a wide range of powers to promote the health, safety, and prosperity of the American people. The proliferation of AI tools will require regulators to adapt, but it does not mean they need to reinvent the wheel.

 

Jack Corrigan is a senior research analyst at Georgetown University’s Center for Security and Emerging Technology (CSET).

Owen J. Daniels is the Andrew W. Marshall fellow at Georgetown University’s Center for Security and Emerging Technology (CSET).

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.